robotic arm
Haptic-based Complementary Filter for Rigid Body Rotations
Kumar, Amit, Campolo, Domenico, Banavar, Ravi N.
The non-commutative nature of 3D rotations poses well-known challenges in generalizing planar problems to three-dimensional ones, even more so in contact-rich tasks where haptic information (i.e., forces/torques) is involved. In this sense, not all learning-based algorithms that are currently available generalize to 3D orientation estimation. Non-linear filters defined on $\mathbb{SO}(3)$ are widely used with inertial measurement sensors; however, none of them have been used with haptic measurements. This paper presents a unique complementary filtering framework that interprets the geometric shape of objects in the form of superquadrics, exploits the symmetry of $\mathbb{SO}(3)$, and uses force and vision sensors as measurements to provide an estimate of orientation. The framework's robustness and almost global stability are substantiated by a set of experiments on a dual-arm robotic setup.
- North America > United States > New York > Richmond County > New York City (0.04)
- North America > United States > New York > Queens County > New York City (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (4 more...)
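The abstract above describes a complementary filter evolving on $\mathbb{SO}(3)$. As a minimal sketch of how such a filter operates (a standard geometric complementary filter, not the paper's haptic formulation: the measured rotation `R_meas` here simply stands in for whatever the force/vision pipeline produces, and the gain `k` is an illustrative choice):

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector (so(3) hat map)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: matrix exponential of hat(w)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def complementary_step(R_est, omega, R_meas, dt, k=1.0):
    """One filter step: propagate with the angular velocity, then pull
    the estimate toward the measured rotation by a gain-weighted error."""
    # innovation: the skew part of R_est^T R_meas encodes the attitude error
    E = R_est.T @ R_meas
    err = 0.5 * np.array([E[2, 1] - E[1, 2],
                          E[0, 2] - E[2, 0],
                          E[1, 0] - E[0, 1]])
    return R_est @ exp_so3((omega + k * err) * dt)
```

With zero angular velocity the estimate converges to the measured rotation, which is the "complementary" behavior: the propagation term dominates at high frequency and the measurement correction at low frequency.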
See-Control: A Multimodal Agent Framework for Smartphone Interaction with a Robotic Arm
Zhao, Haoyu, Ding, Weizhong, Yang, Yuhao, Tian, Zheng, Yang, Linyi, Shao, Kun, Wang, Jun
Recent advances in Multimodal Large Language Models (MLLMs) have enabled their use as intelligent agents for smartphone operation. However, existing methods depend on the Android Debug Bridge (ADB) for data transmission and action execution, limiting their applicability to Android devices. In this work, we introduce the novel Embodied Smartphone Operation (ESO) task and present See-Control, a framework that enables smartphone operation via direct physical interaction with a low-DoF robotic arm, offering a platform-agnostic solution. See-Control comprises three key components: (1) an ESO benchmark with 155 tasks and corresponding evaluation metrics; (2) an MLLM-based embodied agent that generates robotic control commands without requiring ADB or system back-end access; and (3) a richly annotated dataset of operation episodes, offering valuable resources for future research. By bridging the gap between digital agents and the physical world, See-Control provides a concrete step toward enabling home robots to perform smartphone-dependent tasks in realistic environments.
- Europe > United Kingdom > England > Greater London > London (0.50)
- Europe > Spain (0.04)
- Europe > Italy > Veneto > Venice (0.04)
- (5 more...)
- Information Technology > Services (1.00)
- Consumer Products & Services (1.00)
- Leisure & Entertainment > Sports (0.93)
- Media > Music (0.68)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
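Driving a touchscreen with a low-DoF arm requires mapping the agent's pixel-level actions to physical poses. The sketch below shows one common way to do this, a calibrated homography from screen pixels to the arm's workspace plane; the calibration matrix `H`, the function names, and the hover/press convention are illustrative assumptions, not See-Control's actual interface:

```python
import numpy as np

# Hypothetical calibration: homography H maps screen pixels (u, v, 1)
# to arm workspace coordinates (x, y, 1) on the phone's surface plane.
H = np.array([[0.0002, 0.0,    0.05],
              [0.0,    0.0002, 0.10],
              [0.0,    0.0,    1.0]])

def tap_to_pose(u, v, hover_mm=10.0):
    """Map an agent's tap(u, v) pixel action to a press target plus a
    hover waypoint above it (x, y, z in metres)."""
    p = H @ np.array([u, v, 1.0])
    x, y = p[:2] / p[2]
    return {"hover": (x, y, hover_mm / 1000.0), "press": (x, y, 0.0)}
```

In practice `H` would be estimated once per phone placement from a few known pixel/workspace correspondences.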
Closed-Loop Robotic Manipulation of Transparent Substrates for Self-Driving Laboratories using Deep Learning Micro-Error Correction
Fontenot, Kelsey, Gorti, Anjali, Goel, Iva, Buonassisi, Tonio, Siemenn, Alexander E.
Self-driving laboratories (SDLs) have accelerated the throughput and automation capabilities for discovering and improving chemistries and materials. Although these SDLs have automated many of the steps required to conduct chemical and materials experiments, a commonly overlooked step in the automation pipeline is the handling and reloading of the substrates onto which materials are transferred or deposited for downstream characterization. Here, we develop a closed-loop method of Automated Substrate Handling and Exchange (ASHE) using robotics, dual-actuated dispensers, and deep-learning-driven computer vision to detect and correct errors in the manipulation of fragile and transparent substrates for SDLs. Using ASHE, we demonstrate 98.5% first-time placement accuracy across 130 independent trials of reloading transparent glass substrates into an SDL; the only two substrate misplacements were successfully detected as errors and automatically corrected. By developing more accurate and reliable methods for handling various types of substrates, we improve the automation capabilities of self-driving laboratories, further accelerating novel chemical and materials discoveries.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- Asia > Singapore (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
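The closed-loop place/detect/correct cycle described above can be sketched as follows; the stub actuator and vision functions stand in for ASHE's robot commands and deep-learning detector, and the tolerance and noise values are illustrative assumptions:

```python
import random

random.seed(0)

def place_substrate(offset_mm):
    """Stub actuator: returns the achieved placement offset (mm).
    A real system would command the robot here."""
    return offset_mm

def detect_offset(achieved_mm, noise_mm=0.1):
    """Stub vision check standing in for the learned error detector:
    returns the measured misplacement, with sensor noise."""
    return achieved_mm + random.uniform(-noise_mm, noise_mm)

def closed_loop_place(initial_offset_mm, tol_mm=0.5, max_retries=3):
    """Place, measure, and re-place until within tolerance."""
    offset = place_substrate(initial_offset_mm)
    for attempt in range(max_retries):
        measured = detect_offset(offset)
        if abs(measured) <= tol_mm:
            return attempt, offset
        offset = place_substrate(offset - measured)  # corrective move
    return max_retries, offset
```

The key property is that a single gross misplacement is caught by the vision check and removed by one corrective move, which mirrors the two detected-and-corrected errors reported in the abstract.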
Human-Robot Collaboration for the Remote Control of Mobile Humanoid Robots with Torso-Arm Coordination
Boguslavskii, Nikita, Genua, Lorena Maria, Li, Zhi
Recently, humanoid robots have been increasingly deployed in various facilities, including hospitals and assisted living environments, where they are often remotely controlled by human operators. Their kinematic redundancy enhances reachability and manipulability, enabling them to navigate complex, cluttered environments and perform a wide range of tasks. However, this redundancy also presents significant control challenges, particularly in coordinating the movements of the robot's macro-micro structure (torso and arms). We therefore propose several human-robot collaborative (HRC) methods for coordinating the torso and arm of remotely controlled mobile humanoid robots, aiming to balance autonomy and human input to enhance system efficiency and task execution. The proposed methods include human-initiated approaches, where users manually control torso movements, and robot-initiated approaches, which autonomously coordinate the torso and arm based on factors such as reachability, task goal, or inferred human intent. We conducted a user study with N=17 participants to compare the proposed approaches in terms of task performance, manipulability, and energy efficiency, and analyzed which methods participants preferred. HRC control enables humans and robot autonomy to complement each other and improves overall robotic manipulation performance.
- Research Report > New Finding (0.49)
- Research Report > Experimental Study (0.49)
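One of the robot-initiated triggers mentioned above is manipulability. A standard way to quantify it is Yoshikawa's measure $w(q)=\sqrt{\det(J J^{\top})}$; the sketch below applies it to a planar 2-link arm (the link lengths, the 2-link model, and the threshold are illustrative assumptions, not the paper's controller):

```python
import numpy as np

def planar_2link_jacobian(q1, q2, l1=0.4, l2=0.3):
    """Jacobian of a planar 2-link arm (end-effector x, y vs. joints)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def yoshikawa(J):
    """Yoshikawa manipulability measure sqrt(det(J J^T))."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def torso_should_assist(q1, q2, threshold=0.03):
    """Robot-initiated rule of thumb: recruit the torso when the arm
    nears a singular, low-manipulability posture."""
    return yoshikawa(planar_2link_jacobian(q1, q2)) < threshold
```

Near a straight-arm singularity (elbow angle close to zero) the measure collapses and the torso takes over the gross motion, which is the intuition behind reachability-based torso-arm coordination.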
Translating Cultural Choreography from Humanoid Forms to Robotic Arm
Chen, Chelsea-Xi, Zhang, Zhe, Zhou, Aven-Le
Robotic arm choreography often reproduces trajectories while missing cultural semantics. This study examines whether symbolic posture transfer with joint-space-compatible notation can preserve semantic fidelity on a six-degree-of-freedom arm and remain portable across morphologies. We implement ROPERA, a three-stage pipeline for encoding culturally codified postures, composing symbolic sequences, and decoding to servo commands. A scene from the Kunqu opera *The Peony Pavilion* serves as the material for evaluation. The procedure includes corpus-based posture selection, symbolic scoring, direct joint angle execution, and a visual layer with light painting and costume-informed colors. Results indicate reproducible execution with the intended timing, and cultural legibility as reported by experts and audiences. The study points to non-anthropocentric cultural preservation and portable authoring workflows. Future work will design dance-informed transition profiles, extend the notation to locomotion with haptic, musical, and spatial cues, and test portability across platforms.
- Asia > China > Shaanxi Province > Xi'an (0.05)
- Asia > China > Shanghai > Shanghai (0.04)
- North America > Canada > Nova Scotia > Halifax Regional Municipality > Halifax (0.04)
- (3 more...)
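The encode/compose/decode pipeline above can be sketched as a lexicon of symbolic postures decoded into a dense joint trajectory; the posture names, angles, and linear interpolation are hypothetical stand-ins (ROPERA's notation and transition profiles are not specified here):

```python
# Hypothetical posture lexicon: symbolic labels -> 6-DoF joint angles (deg).
POSTURES = {
    "bow":   [0, -30, 60, 0, 45, 0],
    "point": [30, 0, 30, 0, -20, 10],
    "rest":  [0, 0, 0, 0, 0, 0],
}

def decode_score(score, steps_per_transition=4):
    """Decode a symbolic sequence into a dense joint trajectory by
    linearly interpolating between lexicon postures."""
    traj = [list(POSTURES[score[0]])]
    for a, b in zip(score, score[1:]):
        qa, qb = POSTURES[a], POSTURES[b]
        for s in range(1, steps_per_transition + 1):
            t = s / steps_per_transition
            traj.append([(1 - t) * x + t * y for x, y in zip(qa, qb)])
    return traj
```

Because the score is symbolic, the same sequence could be re-decoded on a different morphology by swapping the lexicon, which is the portability argument the abstract makes.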
The Download: AI-powered warfare, and how embryo care is changing
Plus: Why other industries are keeping such a close eye on Big Tech's job cuts It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island's air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing's act of aggression. Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat. But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight.
- Asia > Taiwan (0.25)
- Asia > China > Beijing > Beijing (0.25)
- North America > United States > Massachusetts (0.05)
- (4 more...)
- Media (1.00)
- Government > Military (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.49)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.30)
AI-Driven Robotics for Optics
Uddin, Shiekh Zia, Vaidya, Sachin, Choudhary, Shrish, Chen, Zhuo, Salib, Raafat K., Huang, Luke, Englund, Dirk R., Soljačić, Marin
Optics is foundational to research in many areas of science and engineering, including nanophotonics, quantum information, materials science, biomedical imaging, and metrology. However, the design, assembly, and alignment of optical experiments remain predominantly manual, limiting throughput and reproducibility. Automating such experiments is challenging due to the strict, non-negotiable precision requirements and the diversity of optical configurations found in typical laboratories. Here, we introduce a platform that integrates generative artificial intelligence, computer vision, and robotics to automate free-space optical experiments. The platform translates user-defined goals into valid optical configurations, assembles them using a robotic arm, and performs micrometer-scale fine alignment using a robot-deployable tool. It then executes a range of automated measurements, including beam characterization, polarization mapping, and spectroscopy, with consistency surpassing that of human operators. This work demonstrates the first flexible, AI-driven automation platform for optics, offering a path towards remote operation, cloud labs, and high-throughput discovery in the optical sciences.
- Law > Intellectual Property & Technology Law (0.46)
- Government (0.46)
- Health & Medicine (0.34)
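Among the automated measurements listed is beam characterization. A standard, camera-based version is the second-moment (D4σ) width of ISO 11146; the sketch below is a generic implementation of that definition, not the platform's code:

```python
import numpy as np

def beam_second_moments(I, x, y):
    """Centroid and D4-sigma diameters of a beam profile from image
    second moments (for a Gaussian beam this equals the 1/e^2 diameter)."""
    I = np.clip(I, 0, None)          # discard negative noise pixels
    X, Y = np.meshgrid(x, y)
    P = I.sum()
    cx, cy = (I * X).sum() / P, (I * Y).sum() / P
    sx2 = (I * (X - cx) ** 2).sum() / P
    sy2 = (I * (Y - cy) ** 2).sum() / P
    return cx, cy, 4 * np.sqrt(sx2), 4 * np.sqrt(sy2)
```

Because it is a closed-form moment computation, it is deterministic and repeatable, which is the kind of consistency the abstract claims over manual alignment and measurement.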
A Reward Net Algorithm
In this section, we present the detailed procedures of MRN in Algorithm 1. The implicit derivative at iteration k (Section 4.2) is bounded using the Cauchy-Schwarz inequality, and the last inequality holds by the definition of Lipschitz smoothness. Under the corresponding assumptions on the outer loss (Lemma 2, Theorems 1 and 2), the gradient with respect to the outer loss is Lipschitz continuous. Worse still, it can be difficult for human experts to give preferences over trajectory pairs (e.g., a pair of poor trajectories); this significantly reduces the efficiency of the feedback in the initial stage.
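The preference feedback over trajectory pairs discussed above is commonly modeled with a Bradley-Terry likelihood, where the probability that trajectory a is preferred over b is a sigmoid of the reward difference. The sketch below fits a linear reward to synthetic pairwise preferences by gradient descent; it illustrates the standard preference-learning setup, not the MRN algorithm itself, and the feature model and learning rate are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def preference_loss_grad(theta, pairs):
    """Gradient of the negative log-likelihood of the Bradley-Terry
    model P(tau_a > tau_b) = sigmoid(R(tau_a) - R(tau_b)), where the
    first trajectory in each pair is always the preferred one and
    R(tau) = theta . (sum of per-step features)."""
    g = np.zeros_like(theta)
    for ta, tb in pairs:
        fa, fb = ta.sum(axis=0), tb.sum(axis=0)
        z = theta @ fa - theta @ fb
        p = 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))
        g += (p - 1.0) * (fa - fb)
    return g / len(pairs)

# Toy data: preferences are generated by hidden weights w_true.
w_true = np.array([1.0, -2.0])
trajs = [rng.normal(size=(5, 2)) for _ in range(40)]
pairs = []
for i in range(0, 40, 2):
    a, b = trajs[i], trajs[i + 1]
    if w_true @ a.sum(axis=0) < w_true @ b.sum(axis=0):
        a, b = b, a                  # put the preferred trajectory first
    pairs.append((a, b))

theta = np.zeros(2)
for _ in range(300):
    theta -= 0.5 * preference_loss_grad(theta, pairs)
```

The difficulty the section raises, that early pairs may both be poor, shows up here as reward differences near zero, where the sigmoid is flattest and each label carries the least gradient signal.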
MLM: Learning Multi-task Loco-Manipulation Whole-Body Control for Quadruped Robot with Arm
Liu, Xin, Ma, Bida, Qi, Chenkun, Ding, Yan, Xu, Nuo, Zhaxizhuoma, Zhang, Guorong, Chen, Pengan, Liu, Kehui, Jia, Zhongjie, Guan, Chuyue, Mo, Yule, Liu, Jiaqi, Gao, Feng, Zhong, Jiangwei, Zhao, Bin, Li, Xuelong
Whole-body loco-manipulation for quadruped robots with arms remains a challenging problem, particularly in achieving multi-task control. To address this, we propose MLM, a reinforcement learning framework driven by both real-world and simulation data. It enables a six-DoF robotic arm-equipped quadruped robot to perform whole-body loco-manipulation for multiple tasks autonomously or under human teleoperation. To address the problem of balancing multiple tasks during the learning of loco-manipulation, we introduce a trajectory library with an adaptive, curriculum-based sampling mechanism. This approach allows the policy to efficiently leverage real-world collected trajectories for learning multi-task loco-manipulation. To address deployment scenarios with only historical observations and to enhance the performance of policy execution across tasks with different spatial ranges, we propose a Trajectory-Velocity Prediction policy network. It predicts unobservable future trajectories and velocities. By leveraging extensive simulation data and curriculum-based rewards, our controller achieves whole-body behaviors in simulation and zero-shot transfer to real-world deployment. Ablation studies in simulation verify the necessity and effectiveness of our approach, while real-world experiments on a Go2 robot with an Airbot robotic arm demonstrate the policy's good performance in multi-task execution.
- Asia > China > Shanghai > Shanghai (0.05)
- North America > United States > New York > Broome County > Binghamton (0.04)
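The adaptive, curriculum-based sampling over the trajectory library described above can be sketched as success-rate-weighted sampling; the weighting scheme (draw low-success tasks more often, with a floor so mastered tasks are not forgotten) and the EMA update are illustrative assumptions, not MLM's exact mechanism:

```python
import random

random.seed(1)

class TrajectoryLibrary:
    """Curriculum-weighted sampler over a library of task trajectories."""

    def __init__(self, tasks, floor=0.1):
        self.success = {t: 0.0 for t in tasks}  # running success rates
        self.floor = floor

    def weights(self):
        # Harder tasks (low success) get higher sampling weight.
        return {t: max(1.0 - s, self.floor) for t, s in self.success.items()}

    def sample(self):
        w = self.weights()
        tasks, vals = zip(*w.items())
        return random.choices(tasks, weights=vals)[0]

    def update(self, task, succeeded, ema=0.9):
        s = self.success[task]
        self.success[task] = ema * s + (1 - ema) * (1.0 if succeeded else 0.0)
```

As a task's success rate rises, its sampling weight decays toward the floor, so training time automatically shifts to the tasks the policy is still failing.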
Intuitive control of supernumerary robotic limbs through a tactile-encoded neural interface
Jia, Tianyu, Yang, Xingchen, McGeady, Ciaran, Li, Yifeng, Lin, Jinzhi, Ho, Kit San, Pan, Feiyu, Ji, Linhong, Li, Chong, Farina, Dario
These authors contributed equally to this work. Abstract: Brain-computer interfaces (BCIs) promise to extend human movement capabilities by enabling direct neural control of supernumerary effectors, yet integrating augmented commands with multiple degrees of freedom without disrupting natural movement remains a key challenge. Here, we propose a tactile-encoded BCI that leverages sensory afferents through a novel tactile-evoked P300 paradigm, allowing intuitive and reliable decoding of supernumerary motor intentions even when superimposed with voluntary actions. The interface was evaluated in a multi-day experiment comprising a single motor recognition task to validate baseline BCI performance and a dual-task paradigm to assess the potential influence between the BCI and natural human movement. The brain interface achieved real-time and reliable decoding of four supernumerary degrees of freedom, with significant performance improvements after only three days of training. Importantly, after training, performance did not differ significantly between the single- and dual-BCI task conditions, and natural movement remained unimpaired during concurrent supernumerary control. Lastly, the interface was deployed in a movement augmentation task, demonstrating its ability to command two supernumerary robotic arms for functional assistance during bimanual tasks. These results establish a new neural interface paradigm for movement augmentation through stimulation of sensory afferents, expanding motor degrees of freedom without impairing natural movement. One-Sentence Summary: A tactile-encoded neural interface enables intuitive control of supernumerary limbs without compromising natural human movement. Main Text: INTRODUCTION Humans interact with their surroundings with remarkable dexterity and efficiency.
Recent advances in robotics and neural interfaces hold the potential to increase these capabilities, enhancing human movement beyond its natural limits. Movement augmentation aims to increase the mechanical degrees of freedom (DoFs) an individual can exert over their surroundings (1), allowing movement tasks to be performed more efficiently or enabling actions otherwise impossible with natural limbs alone, such as trimanual manipulation with a third arm (2). A central challenge, however, lies in achieving practical control of supernumerary effectors (SEs) without compromising natural movement. Current strategies for augmenting DoFs often rely on augmentation by transfer, in which control of SEs is derived from the function of an existing body part, typically one that is task-irrelevant (1, 3, 4).
- North America > United States > Florida > Seminole County > Casselberry (0.04)
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > Massachusetts > Middlesex County > Natick (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.46)
- Information Technology > Artificial Intelligence > Cognitive Science > Neuroscience (0.36)
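P300 paradigms like the one above typically decode intent by averaging repeated stimulus-locked epochs and scoring the late positive deflection. The sketch below shows that generic scheme on synthetic data; the signal model, sampling rate, and window are illustrative assumptions, not the paper's decoder:

```python
import numpy as np

rng = np.random.default_rng(2)
FS = 250  # Hz sampling rate (assumed)
WIN = slice(int(0.25 * FS), int(0.45 * FS))  # 250-450 ms P300 window

def make_epoch(is_target, n=int(0.8 * FS)):
    """Synthetic single-trial epoch: unit-variance noise, plus a
    P300-like positive bump near 300 ms for attended (target) stimuli."""
    x = rng.normal(0.0, 1.0, n)
    if is_target:
        t = np.arange(n) / FS
        x += 3.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return x

def decode(epochs_per_stimulus):
    """Average the repeated epochs for each candidate stimulus and pick
    the one with the largest mean amplitude in the P300 window."""
    scores = [np.mean(np.mean(eps, axis=0)[WIN]) for eps in epochs_per_stimulus]
    return int(np.argmax(scores))
```

Averaging over repetitions suppresses the background EEG by roughly the square root of the repetition count while the evoked response adds coherently, which is why even a simple window-mean score separates the attended stimulus.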